Conversation

vivekgoe

No description provided.

@vivekgoe
Author

/run-gaudi-tests

@vivekgoe vivekgoe force-pushed the lora branch 2 times, most recently from 029856e to 9d92fb2 Compare August 4, 2025 08:08
@vivekgoe
Author

vivekgoe commented Aug 4, 2025

/run-gaudi-tests

@sys-hab-pt-service
Collaborator

Only codeowners can request to run Gaudi tests. Contact list: kzawora-intel, xuechendi, mswiniarsk, adobrzyn

@vivekgoe vivekgoe marked this pull request as ready for review August 4, 2025 10:12
@vivekgoe
Author

vivekgoe commented Aug 4, 2025

Depends on vllm-project/vllm#21923 being merged.

@vivekgoe vivekgoe changed the title from "Draft: Add support for LoRA" to "Add support for LoRA" Aug 4, 2025
@vivekgoe vivekgoe force-pushed the lora branch 2 times, most recently from a9c2b75 to 385d861 Compare August 7, 2025 07:24
@adobrzyn
Collaborator

adobrzyn commented Aug 7, 2025

/run-gaudi-tests

@xuechendi
Collaborator

@vivekgoe, please add a CI trigger in either the .jenkins folder or tests/full_tests.

@adobrzyn
Collaborator

adobrzyn commented Aug 8, 2025

/run-gaudi-tests

@vivekgoe
Author

@xuechendi I am not familiar with adding CI triggers. Is there an example you can share that I can follow?
@kzawora-intel @adobrzyn @mswiniarsk @xuechendi Please review the PR. @kzawora-intel I need your help to figure out why the LoRA unit tests keep failing to pick up the model from /weka in CI runs; I do not see the issue in my local runs.

@xuechendi
Collaborator

@adobrzyn, what do we suggest for CI: should we add it to the Jenkins folder, i.e. https://github.com/vllm-project/vllm-gaudi/blob/main/.jenkins/test_config.yaml?
@vivekgoe, could you check whether LoRA CI is covered in the .jenkins/test_config.yaml file above? It was migrated from vllm-fork.

@vivekgoe
Author

@xuechendi The LoRA-related tests in the vllm-fork test_config.yaml have not been migrated to the vllm-gaudi Jenkins config file. Also, the unit tests here are triggered differently, from here: https://github.com/HabanaAI/vllm-fork/blob/42c53f28da01c39fd4eeffbae79f1c8210c41d47/.jenkins/test_config.yaml#L172 . If I understand correctly, any new test added to the unit-tests directory gets picked up automatically, which is happening, but the LoRA test is failing on model loading.
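For reference, a migrated LoRA entry in .jenkins/test_config.yaml might look roughly like the sketch below. The stage/step field names and the command are assumptions modeled on the vllm-fork config format linked above, not the actual file contents:

```yaml
# Hypothetical sketch of a LoRA test stage for .jenkins/test_config.yaml.
# Stage/step names, the "flavor" value, and the command are assumptions
# based on the vllm-fork config structure; adjust to the real layout.
stages:
  - name: test_lora
    steps:
      - name: multilora_inference
        flavor: g2
        command: cd .jenkins/lora && python test_multilora.py
```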

@adobrzyn
Collaborator

Please resolve conflicts with main.

@vivekgoe
Author

@adobrzyn @michalkuligowski I will rebase this tomorrow and will then need your help to trigger CI again.

@vivekgoe
Author

@adobrzyn @michalkuligowski I have rebased the PR and checked that the 2 new unit tests added for LoRA pass locally. Please retrigger CI and help review the changes.

@adobrzyn
Collaborator

/run-gaudi-tests

Commits:

- Remove dependency on LoRA worker class: first working version with simple example; fixed BS>1 case; fix in platform.py to avoid error due to missing vllm_config; fix No LoRA case; fix warmup with LoRA; minor cleanup; disable HPU Graphs; clean-up, minor fixes. Signed-off-by: Vivek <[email protected]>
- Add LoRA unit-test. Signed-off-by: Vivek <[email protected]>
- Move LoRA configuration code to separate function. Signed-off-by: Vivek <[email protected]>
- Add Multilora test. Signed-off-by: Vivek <[email protected]>
- Fix mypy error. Signed-off-by: Vivek <[email protected]>
- Update hpu_lora to use patching. Signed-off-by: Vivek <[email protected]>
- Fix for model load error in CI. Signed-off-by: Vivek <[email protected]>
@adobrzyn
Collaborator

/run-gaudi-tests

@adobrzyn
Collaborator

/run-gaudi-tests
